Collaborating Authors

Vanderbilt University


Designing Gaze Analytics for ELA Instruction: A User-Centered Dashboard with Conversational AI Support

Davalos, Eduardo, Zhang, Yike, Jain, Shruti, Srivastava, Namrata, Truong, Trieu, Haque, Nafees-ul, Van, Tristan, Salas, Jorge, McFadden, Sara, Cho, Sun-Joo, Biswas, Gautam, Goodwin, Amanda

arXiv.org Artificial Intelligence

Eye-tracking offers rich insights into student cognition and engagement, but remains underutilized in classroom-facing educational technology due to challenges in data interpretation and accessibility. In this paper, we present the iterative design and evaluation of a gaze-based learning analytics dashboard for English Language Arts (ELA), developed through five studies involving teachers and students. Guided by user-centered design and data storytelling principles, we explored how gaze data can support reflection, formative assessment, and instructional decision-making. Our findings demonstrate that gaze analytics can be approachable and pedagogically valuable when supported by familiar visualizations, layered explanations, and narrative scaffolds. We further show how a conversational agent, powered by a large language model (LLM), can lower cognitive barriers to interpreting gaze data by enabling natural language interactions with multimodal learning analytics. We conclude with design implications for future EdTech systems that aim to integrate novel data modalities in classroom contexts.


Beyond Instructed Tasks: Recognizing In-the-Wild Reading Behaviors in the Classroom Using Eye Tracking

Davalos, Eduardo, Salas, Jorge Alberto, Zhang, Yike, Srivastava, Namrata, Thatigotla, Yashvitha, Gonzales, Abbey, McFadden, Sara, Cho, Sun-Joo, Biswas, Gautam, Goodwin, Amanda

arXiv.org Artificial Intelligence

Understanding reader behaviors such as skimming, deep reading, and scanning is essential for improving educational instruction. While prior eye-tracking studies have trained models to recognize reading behaviors, they often rely on instructed reading tasks, which can alter natural behaviors and limit the applicability of these findings to in-the-wild settings. Additionally, there is a lack of clear definitions for reading behavior archetypes in the literature. We conducted a classroom study to address these issues by collecting instructed and in-the-wild reading data. We developed a mixed-method framework, including a human-driven theoretical model, statistical analyses, and an AI classifier, to differentiate reading behaviors based on their velocity, density, and sequentiality. Our lightweight 2D CNN achieved an F1 score of 0.8 for behavior recognition, providing a robust approach for understanding in-the-wild reading. This work advances our ability to provide detailed behavioral insights to educators, supporting more targeted and effective assessment and instruction.
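The abstract describes differentiating reading behaviors by velocity, density, and sequentiality of gaze. As a minimal sketch of what such gaze-trace features could look like (function and feature names are illustrative assumptions, not the paper's actual pipeline, which feeds a 2D CNN):

```python
import numpy as np

def gaze_features(xs, ys, ts):
    """Compute toy velocity and density features from a gaze trace.

    xs, ys: screen coordinates of gaze samples; ts: timestamps in seconds.
    Returns (mean point-to-point velocity, samples per unit bounding-box area).
    """
    xs, ys, ts = map(np.asarray, (xs, ys, ts))
    dx, dy, dt = np.diff(xs), np.diff(ys), np.diff(ts)
    speeds = np.hypot(dx, dy) / dt          # instantaneous gaze velocity
    velocity = speeds.mean()                # skimming tends to be faster
    # density: how tightly samples cluster within the trace's bounding box
    area = max(np.ptp(xs), 1.0) * max(np.ptp(ys), 1.0)
    density = len(xs) / area                # deep reading tends to be denser
    return velocity, density
```

Features like these, windowed over time, would form the 2D input grid that a lightweight CNN classifier could consume.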


Scale-up Unlearnable Examples Learning with High-Performance Computing

Zhu, Yanfan, Lyngaas, Issac, Meena, Murali Gopalakrishnan, Koran, Mary Ellen I., Malin, Bradley, Moyer, Daniel, Bao, Shunxing, Kapadia, Anuj, Wang, Xiao, Landman, Bennett, Huo, Yuankai

arXiv.org Artificial Intelligence

Recent advancements in AI models are structured to retain user interactions, which could inadvertently include sensitive healthcare data. In the healthcare field, particularly when radiologists use AI-driven diagnostic tools hosted on online platforms, there is a risk that medical imaging data may be repurposed for future AI training without explicit consent, spotlighting critical privacy and intellectual property concerns around healthcare data usage. Addressing these privacy challenges, a novel approach known as Unlearnable Examples (UEs) has been introduced, aiming to make data unlearnable to deep learning models. A prominent method within this area, called Unlearnable Clustering (UC), has shown improved UE performance with larger batch sizes but was previously limited by computational resources. To push the boundaries of UE performance with theoretically unlimited resources, we scaled up UC learning across various datasets using Distributed Data Parallel (DDP) training on the Summit supercomputer. Our goal was to examine UE efficacy at high-performance computing (HPC) levels to prevent unauthorized learning and enhance data security, particularly exploring the impact of batch size on UE's unlearnability. Utilizing the robust computational capabilities of the Summit, extensive experiments were conducted on diverse datasets such as Pets, MedMNist, Flowers, and Flowers102. Our findings reveal that both overly large and overly small batch sizes can lead to performance instability and affect accuracy. However, the relationship between batch size and unlearnability varied across datasets, highlighting the necessity for tailored batch size strategies to achieve optimal data protection. Our results underscore the critical role of selecting appropriate batch sizes based on the specific characteristics of each dataset to prevent learning and ensure data security in deep learning applications.
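Scaling the effective batch size under Distributed Data Parallel training means splitting a global batch across ranks. A minimal sketch of that partitioning logic (the helper name is hypothetical; the study's actual DDP setup on Summit is more involved):

```python
def per_rank_batch(global_batch, world_size, rank):
    """Split a global batch across DDP ranks as evenly as possible.

    The first `global_batch % world_size` ranks take one extra sample,
    so per-rank sizes never differ by more than one.
    """
    base, rem = divmod(global_batch, world_size)
    return base + (1 if rank < rem else 0)
```

Because the abstract finds that both overly large and overly small batches destabilize unlearnability, a sweep over `global_batch` values with this kind of even partitioning is one plausible way to probe the batch-size/unlearnability relationship per dataset.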


Leveraging sinusoidal representation networks to predict fMRI signals from EEG

Li, Yamin, Lou, Ange, Xu, Ziyuan, Wang, Shiyu, Chang, Catie

arXiv.org Artificial Intelligence

In modern neuroscience, functional magnetic resonance imaging (fMRI) has been a crucial and irreplaceable tool that provides a non-invasive window into the dynamics of whole-brain activity. Nevertheless, fMRI is limited by hemodynamic blurring as well as high cost, immobility, and incompatibility with metal implants. Electroencephalography (EEG) is complementary to fMRI and can directly record the cortical electrical activity at high temporal resolution, but has more limited spatial resolution and is unable to recover information about deep subcortical brain structures. The ability to obtain fMRI information from EEG would enable cost-effective imaging across a wider set of brain regions. Further, beyond augmenting the capabilities of EEG, cross-modality models would facilitate the interpretation of fMRI signals. However, as both EEG and fMRI are high-dimensional and prone to artifacts, it is currently challenging to model fMRI from EEG. To address this challenge, we propose a novel architecture that can predict fMRI signals directly from multi-channel EEG without explicit feature engineering. Our model achieves this by implementing a Sinusoidal Representation Network (SIREN) to learn frequency information in brain dynamics from EEG, which serves as the input to a subsequent encoder-decoder to effectively reconstruct the fMRI signal from a specific brain region. We evaluate our model using a simultaneous EEG-fMRI dataset with 8 subjects and investigate its potential for predicting subcortical fMRI signals. The present results reveal that our model outperforms a recent state-of-the-art model, and indicate the potential of leveraging periodic activation functions in deep neural networks to model functional neuroimaging data.
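The defining feature of a SIREN is a sine activation with a frequency scale (commonly written `w0`), paired with a matching weight initialization. A minimal NumPy sketch of one such layer (this illustrates the general SIREN idea, not the paper's specific architecture; names and defaults are assumptions):

```python
import numpy as np

def siren_init(fan_in, fan_out, w0=30.0, first=False, rng=None):
    """Uniform init commonly used for SIREN layers.

    First layer: U(-1/fan_in, 1/fan_in); hidden layers scale the usual
    bound by 1/w0 so pre-activations stay in a well-behaved range.
    """
    rng = rng or np.random.default_rng(0)
    bound = 1.0 / fan_in if first else np.sqrt(6.0 / fan_in) / w0
    return rng.uniform(-bound, bound, size=(fan_in, fan_out))

def siren_layer(x, W, b, w0=30.0):
    """One SIREN layer: linear map followed by a scaled sine activation."""
    return np.sin(w0 * (x @ W + b))
```

Stacking such layers lets the network represent high-frequency structure, which is the property the abstract leverages to learn frequency information in EEG dynamics.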


Want to impress your boss? Praise your colleagues (and yourself)! Scientists claim 'dual promotion' is the key to seeming competent at work

Daily Mail - Science & tech

In the tough world of work we all need to do a little self-promotion now and then. But there's a tough balance to be struck between making our accomplishments known without coming across as unlikeable. Now a study has found the answer: highlight your work-mates' achievements at the same time as you shine a light on your own. Researchers say this 'dual promotion' tactic is the perfect way to make sure we are perceived as competent while still radiating 'warmth'. 'We show that by simultaneously other-promoting - describing accomplishments and qualities of others - and self-promoting - describing one's own accomplishments and qualities - individuals can project both warmth and competence,' said the researchers.


False Negative/Positive Control for SAM on Noisy Medical Images

Yao, Xing, Liu, Han, Hu, Dewei, Lu, Daiwei, Lou, Ange, Li, Hao, Deng, Ruining, Arenas, Gabriel, Oguz, Baris, Schwartz, Nadav, Byram, Brett C, Oguz, Ipek

arXiv.org Artificial Intelligence

The Segment Anything Model (SAM) is a recently developed all-range foundation model for image segmentation. It can use sparse manual prompts such as bounding boxes to generate pixel-level segmentation in natural images but struggles in medical images such as low-contrast, noisy ultrasound images. We propose a refined test-phase prompt augmentation technique designed to improve SAM's performance in medical image segmentation. The method couples multi-box prompt augmentation and an aleatoric uncertainty-based false-negative (FN) and false-positive (FP) correction (FNPC) strategy. We evaluate the method on two ultrasound datasets and show improvement in SAM's performance and robustness to inaccurate prompts, without the necessity for further training or tuning. Moreover, we present the Single-Slice-to-Volume (SS2V) method, enabling 3D pixel-level segmentation using only the bounding box annotation from a single 2D slice. Our results allow efficient use of SAM in even noisy, low-contrast medical images. The source code will be released soon at: https://github.com/xyimaging/FNPC
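The approach combines a multi-box prompt ensemble with an uncertainty-based false-negative/false-positive correction. A minimal sketch of those two pieces, assuming jittered box prompts and simple frequency-based thresholds (function names and thresholds are illustrative, not the released FNPC code):

```python
import numpy as np

def jitter_boxes(box, n=8, scale=0.1, rng=None):
    """Perturb an (x0, y0, x1, y1) box to build a multi-box prompt ensemble."""
    rng = rng or np.random.default_rng(0)
    x0, y0, x1, y1 = box
    w, h = x1 - x0, y1 - y0
    noise = rng.uniform(-scale, scale, size=(n, 4)) * np.array([w, h, w, h])
    return np.asarray(box, dtype=float) + noise

def fn_fp_correct(masks, low=0.2, high=0.8):
    """Aggregate binary ensemble masks and flag uncertain pixels.

    masks: array of shape (n_prompts, H, W) with 0/1 predictions.
    Pixels predicted foreground by nearly all prompts are kept; pixels
    with mid-range agreement are flagged for FN/FP correction.
    """
    p = masks.mean(axis=0)                 # pixel-wise foreground frequency
    keep = p >= high                       # confident foreground
    uncertain = (p > low) & (p < high)     # candidates for correction
    return keep, uncertain
```

In the paper, the uncertainty estimate is aleatoric rather than a raw vote frequency, but the ensemble-then-threshold structure follows the same pattern.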


Vanderbilt researchers using artificial intelligence to help basketball players improve their shots

FOX News

Researchers at Vanderbilt University have developed artificial intelligence technology to potentially assist basketball players in improving their game on the court. Jules White, associate dean for strategic learning programs and associate professor of computer science and computer engineering, and Carlos Olea, a Ph.D. student in the Department of Computer Science, developed an AI software called a temporal relational network to help determine the context and mechanics behind each shot a player takes. "I'm really excited about the potential for AI to help amateurs at home learn and improve," White told Fox News Digital.


AI and Optogenetics Disrupt the Neuroscience of Dopamine

#artificialintelligence

Innovative technologies such as artificial intelligence (AI) machine learning and optogenetics are accelerating discoveries in life sciences, especially in the field of neuroscience. A new breakthrough study published in Current Biology by pioneering brain researchers at Vanderbilt University used optogenetics and AI machine learning to reveal that dopamine is not just a "pleasure molecule" -- a revolutionary finding that may impact how addiction and psychiatric diseases are treated in the future. "Dopamine deficits are seen in patients suffering from substance use disorder," said Erin Calipari, an assistant professor of pharmacology at Vanderbilt University, and faculty member of both the Vanderbilt Brain Institute and the Vanderbilt Center for Addiction Research. "These individuals have reduced dopamine as well as deficits in decision-making that would be explained by our data and new model. These deficits in decision-making are highly correlated with the severity of addiction as well as predicting treatment outcomes. These data are really key to understanding the relationship between dopamine and this disease and figuring out how to treat it."


Artificial Intelligence Systems Learn to Teach Each Other

#artificialintelligence

WASHINGTON, DC, October 4, 2021 (ENS) – A new international project is creating advanced artificial intelligence, AI, programs that will enable machines to learn progressively over a lifetime and share those experiences with each other. Uses of this new technology could include co-operating self-learning autonomous vehicles such as self-driving cars, robotic rescue and exploration systems, distributed monitoring systems to detect emergencies, or cybersecurity systems of agents that monitor large networks. Researchers hope the technology will allow machines to reuse information, adapt quickly to new conditions and collaborate by sharing information. The project is part of the initiative Shared-Experience Lifelong Learning, or ShELL, a program funded by the Defense Advanced Research Projects Agency, DARPA. This U.S. government military agency is credited with some of the biggest technological advances in recent history such as the Internet, the miniaturization of GPS, Siri, and the computer mouse.


New Project Hopes to Make Independent AI Systems Learn from Each Other

#artificialintelligence

The aim behind a new international project is to develop advanced AI programs that will allow machines to learn gradually over a lifetime and share that input with each other. Scientists are optimistic that the technology will enable machines to reuse data, adapt rapidly to new conditions and work in partnership by sharing data. The project comes under the initiative known as Shared-Experience Lifelong Learning (ShELL), a program financially supported by the Defense Advanced Research Projects Agency (DARPA) -- a U.S. government agency known for some major technological developments in recent history such as the Internet, Siri, the miniaturization of GPS and the computer mouse. It began this month and is being headed by Dr. Andrea Soltoggio of Loughborough's Computer Science department, in partnership with Dr. Soheil Kolouri at Vanderbilt University and Dr. Cong Liu at the University of Texas at Dallas, both in the United States. The idea behind this project is to gain a deep understanding of how and what an AI system learns when dealing with a new task, so that we can exploit task similarities and share information to create fast, reliable, and collaborating learning agents.